Improving machine simultaneous interpretation by punctuation recovery
CHEN Yuna, SHI Xiaodong
Journal of Computer Applications    2020, 40 (4): 972-977.   DOI: 10.11772/j.issn.1001-9081.2019101711
In the Machine Simultaneous Interpretation (MSI) pipeline, semantic incompleteness occurs when Automatic Speech Recognition (ASR) outputs are fed directly into Neural Machine Translation (NMT). To address this problem, a model based on Bidirectional Encoder Representations from Transformers (BERT) and Focal Loss was proposed. Firstly, several segments generated by the ASR system were cached and concatenated into a string. Then a BERT-based sequence labeling model was used to recover the punctuation of the string, with Focal Loss as the training loss function to alleviate the class imbalance between the many unpunctuated samples and the few punctuated ones. Finally, the punctuation-restored string was input into NMT. Experimental results on English-German and Chinese-English translation show that, in terms of translation quality, MSI using the proposed punctuation recovery model improves by 8.19 BLEU and 4.24 BLEU respectively over MSI with ASR outputs fed directly into NMT, and by 2.28 BLEU and 3.66 BLEU respectively over MSI using a punctuation recovery model based on a bidirectional recurrent neural network with attention. Therefore, the proposed model can be effectively applied to MSI.
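The focal loss used above down-weights easy, well-classified examples so that the abundant unpunctuated tokens do not dominate training. A minimal sketch of the binary punctuated/unpunctuated case follows; the gamma and alpha values are illustrative defaults, not parameters taken from the paper:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction.
    p: predicted probability of the positive (punctuated) class;
    y: true label, 1 = punctuated, 0 = unpunctuated."""
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    # (1 - pt)^gamma shrinks the loss of confident, correct predictions,
    # focusing training on the hard (often punctuated) samples.
    return -a * (1.0 - pt) ** gamma * math.log(pt)
```

With gamma = 0 the expression reduces to ordinary weighted cross-entropy; raising gamma suppresses the contribution of easy examples.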
Single image super resolution algorithm based on structural self-similarity and deformation block feature
XIANG Wen, ZHANG Ling, CHEN Yunhua, JI Qiumin
Journal of Computer Applications    2019, 39 (1): 275-280.   DOI: 10.11772/j.issn.1001-9081.2018061230
To address the insufficient sample resources and poor noise immunity of single-image Super Resolution (SR) restoration, a single-image super-resolution algorithm based on structural self-similarity and deformation block features was proposed. Firstly, a scale model was constructed to expand the search space as much as possible, overcoming the lack of training samples for single-image super-resolution. Secondly, the limited internal dictionary was enlarged by geometric deformation of sample blocks. Finally, to improve the noise robustness of the reconstructed image, a group sparse learning dictionary was used for reconstruction. The experimental results show that, compared with excellent algorithms such as Bicubic, Sparse coding Super Resolution (ScSR) and Super-Resolution Convolutional Neural Network (SRCNN), the proposed algorithm obtains super-resolution images with better subjective visual quality and higher objective evaluation scores, increasing the Peak Signal-to-Noise Ratio (PSNR) by about 0.35 dB on average. In addition, geometric deformation expands the dictionary and increases search accuracy, reducing the average running time of the algorithm by about 80 s.
Single image super resolution combining with structural self-similarity and convolution networks
XIANG Wen, ZHANG Ling, CHEN Yunhua, JI Qiumin
Journal of Computer Applications    2018, 38 (3): 854-858.   DOI: 10.11772/j.issn.1001-9081.2017081920
Aiming at the ill-posed inverse problem of single-image Super Resolution (SR) restoration, a single-image super-resolution algorithm combining structural self-similarity and convolutional networks was proposed. Firstly, the structural self-similarity of the samples to be reconstructed was obtained by scaling decomposition; combined with external database samples as training samples, this alleviated the over-dispersion of samples. Secondly, the samples were input into a Convolutional Neural Network (CNN) for training, yielding prior knowledge for single-image super-resolution. Then, the optimal dictionary was used to reconstruct the image under a nonlocal constraint. Finally, an iterative back-projection algorithm was used to further improve the super-resolution result. The experimental results show that, compared with excellent algorithms such as Bicubic, K-SVD and Super-Resolution Convolutional Neural Network (SRCNN), the proposed algorithm produces super-resolution reconstructions with clearer edges.
Data augmentation method based on conditional generative adversarial net model
CHEN Wenbing, GUAN Zhengxiong, CHEN Yunjie
Journal of Computer Applications    2018, 38 (11): 3305-3311.   DOI: 10.11772/j.issn.1001-9081.2018051008
Deep Convolutional Neural Networks (CNN) are trained on large-scale labelled datasets; after training, such models achieve high recognition rates or good classification results. However, training CNN models on smaller datasets usually leads to overfitting. To solve this problem, a novel data augmentation method called GMM-CGAN was proposed, integrating a Gaussian Mixture Model (GMM) with a Conditional Generative Adversarial Net (CGAN). Firstly, the number of samples was increased by randomly sliding a sampling window around the core region of each image. Secondly, the random noise vector was assumed to follow the distribution of a GMM; it served as the initial input to the CGAN generator, with the image label as the CGAN condition, to train the parameters of the CGAN and GMM models. Finally, the trained CGAN was used to generate a new dataset matching the real sample distribution. The original dataset was divided into 12 classes of 386 items; after applying GMM-CGAN, the new dataset contained 38600 items in total. The experimental results show that, compared with CNN training datasets augmented by affine transformation or by plain CGAN, the proposed method achieves an average classification accuracy of 89.1%, an improvement of 18.2% and 14.1% respectively.
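The key departure from a standard CGAN above is that generator noise is drawn from a learned mixture rather than a single Gaussian. A minimal sketch of sampling from a diagonal-covariance mixture follows; the component weights, means and standard deviations are illustrative assumptions, not the fitted GMM parameters from the paper:

```python
import random

def sample_gmm_noise(weights, means, stds):
    """Draw one noise vector from a diagonal Gaussian mixture:
    pick a mixture component by weight, then sample each dimension
    from that component's univariate Gaussian."""
    k = random.choices(range(len(weights)), weights=weights)[0]
    return [random.gauss(m, s) for m, s in zip(means[k], stds[k])]

# Example: a 2-component mixture over a 3-dimensional noise vector.
z = sample_gmm_noise(
    weights=[0.3, 0.7],
    means=[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]],
    stds=[[1.0, 1.0, 1.0], [0.5, 0.5, 0.5]],
)
```

In the full method this vector would feed the CGAN generator together with the class label as the condition.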
Link prediction method for complex network based on closeness between nodes
DING Dazhao, CHEN Yunjie, JIN Yanqing, LIU Shuxin
Journal of Computer Applications    2017, 37 (8): 2129-2132.   DOI: 10.11772/j.issn.1001-9081.2017.08.2129
Many link prediction methods focus only on the standard metric AUC (Area Under the receiver operating characteristic Curve), ignoring the precision metric and the closeness between common neighbors and endpoints under different topological structures. To solve these problems, a link prediction method based on closeness between nodes was proposed. To describe the similarity between endpoints more accurately, a closeness measure for common neighbors was designed from the local topological information around them, adjusted for different networks through a parameter. Empirical studies on six real networks show that, compared with similarity indices such as Common Neighbors (CN), Resource Allocation (RA), Adamic-Adar (AA), Local Path (LP) and Katz, the proposed index improves prediction accuracy.
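The baseline indices named above score a candidate link by its endpoints' common neighbors, differing only in how a common neighbor's degree is penalized. A minimal sketch over an adjacency-set representation (the toy graph is an illustrative assumption):

```python
import math

def common_neighbors(adj, x, y):
    return adj[x] & adj[y]

def cn(adj, x, y):
    # Common Neighbors: simply count the shared neighbors.
    return len(common_neighbors(adj, x, y))

def ra(adj, x, y):
    # Resource Allocation: high-degree common neighbors contribute less.
    return sum(1.0 / len(adj[z]) for z in common_neighbors(adj, x, y))

def aa(adj, x, y):
    # Adamic-Adar: logarithmic degree penalty (degrees must exceed 1).
    return sum(1.0 / math.log(len(adj[z]))
               for z in common_neighbors(adj, x, y))

# A toy undirected network as adjacency sets.
adj = {
    'a': {'c', 'd'},
    'b': {'c', 'd'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'b', 'c'},
}
```

The proposed index replaces the fixed degree penalty with a parameterized closeness term built from the local topology around each common neighbor.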
Salient object detection and extraction method based on reciprocal function and spectral residual
CHEN Wenbing, JU Hu, CHEN Yunjie
Journal of Computer Applications    2017, 37 (7): 2071-2077.   DOI: 10.11772/j.issn.1001-9081.2017.07.2071
To solve the problems of the "center-surround" salient object detection and extraction method, such as incompletely detected or extracted objects, unsmooth boundaries, and the redundancy caused by down-sampling a 9-level pyramid, a salient object detection method based on a Reciprocal Function and Spectral Residual (RFSR) was proposed. Firstly, the difference between the intensity image and its Gaussian low-pass version replaced the normalization of the intensity image in the "center-surround" model, and the Gaussian pyramid was reduced to 6 levels to avoid redundancy. Secondly, a reciprocal function filter was used instead of a Gabor filter to extract local orientation information. Thirdly, the spectral residual algorithm was used to extract spectral features. Finally, the three extracted features were combined to generate the final saliency map. The experimental results on two of the most common benchmark datasets show that, compared with the "center-surround" and spectral residual models, the proposed method significantly improves precision, recall and F-measure, laying a foundation for subsequent image analysis, object recognition, visual-attention-based image retrieval and so on.
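The spectral residual step used as the third feature is the classic Hou-Zhang construction: subtract the locally averaged log-amplitude spectrum from the log-amplitude spectrum, keep the phase, and transform back. A NumPy-only sketch (the window size and wrap-around border handling are simplifying assumptions):

```python
import numpy as np

def box_mean(a, r=1):
    """Mean filter over a (2r+1) x (2r+1) window, wrap-around borders."""
    out = np.zeros_like(a)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out += np.roll(np.roll(a, dx, axis=0), dy, axis=1)
    return out / (2 * r + 1) ** 2

def spectral_residual_saliency(img):
    """Saliency map of a grayscale image from its spectral residual:
    log-amplitude spectrum minus its local average, recombined with
    the original phase and transformed back to the spatial domain."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box_mean(log_amp)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-8)   # normalize to [0, 1]
```

In the full RFSR method this map is combined with the intensity and reciprocal-function orientation features before producing the final saliency map.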
Review on lightweight cryptography suitable for constrained devices
YANG Wei, WAN Wunan, CHEN Yun, ZHANG Yantao
Journal of Computer Applications    2014, 34 (7): 1871-1877.   DOI: 10.11772/j.issn.1001-9081.2014.07.1871
With the rapid development of the Internet of Things (IoT), the security of constrained devices faces serious challenges. LightWeight Cryptography (LWC), as the main security measure for constrained devices, is attracting more and more attention from researchers. Recent advances in lightweight cryptography, covering design strategy, security and performance, were reviewed. Firstly, design strategies and the key issues arising during design were elaborated, and the principles and implementation mechanisms of some typical, commonly used lightweight ciphers were analyzed and discussed. Then the commonly used cryptanalysis methods were summarized, and the threat of side-channel attacks and the issues to note when adding resistance mechanisms were emphasized. Furthermore, the existing lightweight ciphers were compared and analyzed in detail in terms of the important performance indicators of lightweight cryptography, and the environments suited to hardware-oriented and software-oriented lightweight ciphers were given. Finally, some unresolved difficult issues and possible future directions of lightweight cryptography research were pointed out. Considering the characteristics of lightweight cryptography and its application environments, comprehensive assessment of security and performance will be an issue worth in-depth research in the future.
Survey of influence in social networks
XIA Tao, CHEN Yunfang, ZHANG Wei, LU Youwei
Journal of Computer Applications    2014, 34 (4): 980-985.   DOI: 10.11772/j.issn.1001-9081.2014.04.0980
In the field of social influence propagation, the social network as a medium plays a fundamental role in the interaction between social individuals and in disseminating information and opinions. First, the definition of social influence and its essential attribute, social relevance, were discussed. Then, the independent cascade model and the linear threshold model were expounded, as well as the greedy and heuristic algorithms that identify influential people. Finally, new trends in research on social influence, such as community-based influence maximization algorithms and research on the influence of multiple subjects and multiple behaviors, were analyzed in depth.
Tone mapping algorithm based on multi-scale decomposition
HU Qingxin, CHEN Yun, FANG Jing
Journal of Computer Applications    2014, 34 (3): 785-789.   DOI: 10.11772/j.issn.1001-9081.2014.03.0785
A new Tone Mapping (TM) algorithm based on multi-scale decomposition was proposed to address displaying a High Dynamic Range (HDR) image on an ordinary display device. The algorithm decomposed an HDR image into multiple scales using a Local Edge-Preserving (LEP) filter, which smooths image details effectively while retaining salient edges. Then a parameterized dynamic range compression function was proposed according to the characteristics of the decomposed layers and the compression requirements. By adjusting the parameters, the coarse-scale layer was compressed and the fine-scale layer was boosted, compressing the dynamic range of the image while enhancing its details. Finally, by reconstructing the image and restoring its color, the mapped image had good visual quality. The experimental results demonstrate that the proposed method outperforms the algorithms of Gu et al. (GU B, LI W J, ZHU M Y, et al. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping [J]. IEEE Transactions on Image Processing, 2013, 22(1): 70-79) and Yeganeh et al. (YEGANEH H, WANG Z. Objective quality assessment of tone-mapped images [J]. IEEE Transactions on Image Processing, 2013, 22(2): 657-667) in naturalness, structural fidelity and quality assessment; moreover, it avoids the halo artifacts common in local tone mapping algorithms. The algorithm can be used for tone mapping of HDR images.
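The recombination step described above, compressing the coarse base layer while boosting the finer detail layers, can be sketched with fixed gains. This is a deliberately simplified stand-in: the paper's compression function is parameterized per layer, and the gain values here are illustrative assumptions:

```python
def remap_layers(base, details, compress=0.5, boost=1.5):
    """Recombine a multi-scale decomposition of log luminance:
    attenuate the coarse base layer (compressing dynamic range)
    and amplify the detail layers (boosting local contrast).
    base: coarsest-layer value; details: finer-scale layer values."""
    return compress * base + sum(boost * d for d in details)
```

Because the base layer carries most of the dynamic range and the detail layers carry local contrast, scaling them separately compresses the range without flattening detail.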
Improved CenSurE detector and a new rapid descriptor based on gradient of summed image patch
Fang CHEN, Yun-liang JIANG, Yun-xi XU
Journal of Computer Applications    2011, 31 (07): 1818-1821.   DOI: 10.3724/SP.J.1087.2011.01818
This paper proposed a new real-time, robust local feature detector and descriptor, applicable to computer vision tasks with strict real-time demands. CenSurE has attracted wide attention for its extremely efficient computation, but because of its linear scale sampling, its filter response signal is very sparse and cannot achieve high repeatability. Therefore, this paper modified the detector to use logarithmic scale sampling and obtained better performance. The new rapid descriptor, based on the Gradient of the Summed Image Patch, is called GSIP. An extensive experimental evaluation shows that the GSIP descriptor is more distinctive than the state-of-the-art SURF descriptor for image region matching and object recognition. Furthermore, GSIP achieves a two-fold speed increase over SURF.
JING Xiao-ning, LI Quan-tong, CHEN Yun-xiang, LV Zhen-zhong
Journal of Computer Applications    2005, 25 (02): 417-419.   DOI: 10.3724/SP.J.1087.2005.0417
Aiming at the test sequencing problem in sequential fault diagnosis for large-scale systems, an algorithm based on information entropy for designing a least-cost fault diagnosis strategy was presented. The algorithm requires less computation than traditional methods, and makes efficient use of test results, test costs and fault probabilities. It is suitable for both on-line and off-line diagnosis and maintenance. The design process of the algorithm was presented, and an example was used to illustrate the validity of the method.
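An entropy-guided diagnosis strategy of this kind typically picks, at each step, the test whose outcome is expected to yield the most information per unit cost. A minimal greedy sketch follows; the encoding (a test "fails" exactly when one of the faults it detects is present) is an assumption for illustration, not the paper's exact formulation:

```python
import math

def entropy(p):
    """Binary entropy of a fail probability, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def next_test(fault_probs, detects, costs):
    """Greedy choice of the next test: maximize expected information
    gain per unit cost.  fault_probs maps fault -> probability,
    detects maps test -> set of faults that make the test fail,
    costs maps test -> test cost."""
    total = sum(fault_probs.values())
    def score(t):
        p_fail = sum(fault_probs[f] for f in detects[t]) / total
        return entropy(p_fail) / costs[t]
    return max(detects, key=score)
```

A test that splits the remaining fault candidates evenly carries one full bit of information, so it is preferred over a lopsided test of the same cost; raising a test's cost proportionally lowers its score.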